AIbase
2025-07-04 08:52:41.AIbase

Open Source DeepSeek R1 Enhanced Version: 200% Improvement in Inference Efficiency, Lower Costs

2025-07-02 17:09:20.AIbase

Foxconn Launches Its First AI Inference Large Model FoxBrain, Trademark Application Submitted

2025-07-02 15:46:07.AIbase

Foxconn's Parent Company Registers a Trademark for an AI Inference Large Model

2025-06-18 09:38:23.AIbase

OpenAI Advances GPT-4.5 API Deprecation Plan, Sparking Strong Reactions in the Developer Community

2025-06-18 09:17:36.AIbase

A Major Revolution in Large Model Inference! CMU and NVIDIA Collaborate to Launch Multiverse for Ultra-Fast Parallel Generation

2025-06-18 08:51:23.AIbase

Google Releases Powerful AI Model Gemini 2.5 Flash-Lite: Faster Inference and Lower Costs!

2025-06-17 10:42:20.AIbase

Groq Teams Up with Hugging Face to Challenge Cloud Giants: A New Step Forward in AI Inference Speed

2025-06-17 09:10:02.AIbase

MiniMax-M1 Open Source: The World's First Large-Scale Hybrid Architecture Inference Model

2025-06-13 10:27:47.AIbase

AMD and OpenAI Launch a Powerful AI Chip: A 35x Increase in Inference Performance

2025-06-13 09:18:05.AIbase

Mistral AI Teams Up with NVIDIA to Build Sovereign AI Infrastructure and Top-tier Inference Models

2025-06-11 09:07:38.AIbase

French AI Lab Mistral Releases New Inference Model Magistral; Small Version Now Available for Download

2025-06-05 14:31:04.AIbase

Internet Queen's AI Trend Report: High Cost of AI Model Training but Plunging Inference Costs

2025-06-05 14:23:05.AIbase

Mary Meeker's Latest Report: AI Training Costs Approach $10 Billion While Inference Costs Plummet by 99%

2025-06-03 14:25:34.AIbase

NVIDIA, MIT, and The University of Hong Kong Team Up to Launch Fast-dLLM Framework, Dramatically Boosting Inference Speed

2025-06-03 13:46:44.AIbase

NVIDIA and MIT Collaborate to Launch Fast-dLLM Framework, Boosting AI Inference Speed by 27.6 Times

2025-06-03 10:41:04.AIbase

Cerebras Inference API Fully Opened: Developers Receive One Million Free Tokens Daily

2025-05-27 11:58:08.AIbase

Red Hat Partners with Google and NVIDIA to Launch the LLM-D Open Source Project, Tackling the Dual Challenges of Large-Scale AI Inference Cost and Latency

2025-05-22 15:31:03.AIbase

Huawei FlashComm Technology Boosts Inference Speed of Large Models by 80%

2025-05-22 15:21:35.AIbase

Red Hat Releases New AI Inference Server to Drive Intelligence Development in Hybrid Cloud Environments

2025-05-22 11:54:14.AIbase

SiliconFlow Updates DeepSeek-R1 and Other Inference Model APIs to Support a 128K Context Length